Community Building for Graduate Students: A Targeted Approach
This is my current best thinking on EA community building after four years of doing it. Many of my points are grounded in little other than my own experience, so I welcome comments supporting, building on, or undermining them. It’s also possible that what I write is obvious, but it only became obvious to me fairly recently, so this post may still be worthwhile.
Key takeaway
As a graduate student, you can do more effective community building by targeting research colleagues rather than the general student population.
My story
During my undergrad at the London School of Economics and Political Science (LSE), I was involved in EA community building. I co-organised weekly meetups and guest speaker events, facilitated one Arete Fellowship and then co-directed and facilitated another, and initiated an AI safety reading group.
After LSE, I came to California to start my PhD in Logic and Philosophy of Science (LPS) at UC Irvine (UCI). My aim was, and still is, to acquire the skills and expertise to set myself up for AI alignment research. Admittedly, it is an unconventional route. MIRI is the research institute whose work connects most closely with LPS, and I was relieved to find that two grad students already on the PhD programme (Daniel Herrmann and Josiah Lopez-Wild) also aspire to work on AI alignment research and have recently submitted a paper for publication jointly with Scott Garrabrant.
Alongside the PhD, I wanted to continue engaging with the EA community. I thought that my best bet was to help grow UCI’s EA group (EA UCI). To this end, I created a website for EA UCI and discussed community building strategies with EA UCI’s president as well as with external advisors. However, with the pandemic uprooting our plans to promote EA UCI during involvement/welcome fairs, we were unsuccessful in recruiting members.[1] I was beginning to feel bad about the lack of time I was putting into EA UCI, thinking that if I had done more I could have made EA UCI bigger than it was, perhaps like what we had at LSE. However, my thinking changed during an EA retreat where I learnt (primarily through Sydney) that conventional (i.e. untargeted) university community building strategies have been ineffective at attracting the talented, good-fit individuals who might have gone on to do impactful research had university groups targeted them more deliberately. After reflecting on an earlier conversation with Catherine Low and later conversations with EAs in LA County, I dropped my plans to put much effort into growing EA UCI and shifted my focus to fellow students with the potential to work on AI safety research.[2] I think this is a better strategy for people in positions similar to mine, for the following reasons:
I am in a community of talented people with relevant expertise for AI safety research, so I can encourage more research to be done in this field
I am myself more interested in AI safety, so I benefit from having more conversations about it with others, which increases my likelihood of having an impact in this field
I can spend less of my time on conventional community building, freeing up time to do my own research and to promote AI safety research amongst others
The ethical baggage that comes with EA can put people off, so focusing on AI safety and justifying its importance with a limited number of ethical claims might be more effective at getting people to work on AI safety research[3]
I’m now having more conversations with my LPS colleagues about unaligned AGI (and existential risks more generally), I’m helping to set up regular dinners with UCI grad students working on AI safety (8 of us as of April 2022), for which I secured funding from EA Funds, and I’m spending more of my time outside of LPS courses reading AI safety research. I expect that this strategy will be more effective in channelling people towards AI safety research. Over the course of my 6 years on the PhD programme, I expect to convince 1-2 people to work on AI safety research who otherwise wouldn’t. If I achieve this, I’d probably advocate this strategy to others in a position similar to mine. If I convince no one, I’d probably discourage others from using it. I will update this post every year. In my next post, I lay out some concrete advice on how to go about persuading others to work on AI safety research. The advice is based on Josiah’s account of how Daniel motivated him to shift his research focus from philosophy of mathematics to AI alignment.
Updates
March 2023: Against my own advice, I continued co-organising the EA UCI group. We ran one round of the EA intro programme with relative success (around 4 participants and 2 organisers are more engaged in EA as a result). We’re now hosting weekly zero-preparation meetings in which we watch an EA-related video and discuss its content. On average, there are around 7 attendees in total. I like that this is a low-effort and time-efficient way for me to keep learning about EA with others. Separately, I still co-organise the AI safety group at UCI. We’ve been experimenting with the setup. After a few months of meetings with around 8 attendees, we decided to split the group into two, one monthly ‘intro’ group and one ‘core’ group, to cater to different levels of familiarity. However, attendance at our intro group was low, so we discontinued it. That left us with the core group of 4 PhD students. We meet every week to discuss the latest developments. Once two of them leave with their PhDs (in a couple of months), we’ll likely seek to admit others who want to discuss AI safety with us. Overall, I still think that targeting research colleagues is the better approach, though I haven’t yet seen many opportunities to do this productively, so I’ve mostly been helping with general outreach efforts.
Credits
Credits go to Daniel and Jake McKinnon for providing helpful feedback prior to this post’s submission.
[1] That said, I got 14 students from Daniel’s Critical Reasoning course onto the EAVP Introductory Program (albeit with the lure of extra credit) after having presented on EA concepts to Daniel’s 250 students.
[2] Other graduate students might swap out AI safety research for biosecurity research or any other high-priority research area to which they are more suited.
[3] I’m wondering what people think about this one. The extreme version would be to purge justifications of all ethical claims and to motivate action by appealing instead to the person’s own preferences/interests. This seems more prevalent within the rationalist community, but it risks alienating those who are primarily motivated by the ethics (e.g. of longtermism).